Introduction to Open Data Science - Course Project

About the project

I’m looking forward to taking part in the Introduction to Open Data Science course. I hope to learn how to use open and free software such as R to analyze data, and to learn more about different statistical methods.

My GitHub repository

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Thu Nov 19 18:34:22 2020"

Exercise 2: Data analysis

date()
## [1] "Thu Nov 19 18:34:22 2020"

Data description

lrn14 <- read.table("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/learning2014.txt", sep = ",", header = TRUE)

dim(lrn14)
## [1] 166   7
str(lrn14)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

The dataset learning2014 includes 7 variables and 166 observations. The observations represent students. Besides the background variables gender and age, the dataset includes information on how each student scored in an exam (variable points) and on their attitude towards statistics on a scale from 1 to 5 (attitude). The variable deep is the average score, on a scale from 1 to 5, of questions concerning deep learning approaches. The variables stra and surf contain the corresponding average scores of questions about strategic and surface learning approaches.

More information about the original dataset, which was used to form this dataset, can be found here

A graphical overview of the data and summary statistics

library(GGally)
## Loading required package: ggplot2
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
library(ggplot2)

p <- ggpairs(lrn14, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

p

summary(lrn14)
##     gender               age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :3.200   Median :3.667  
##                     Mean   :25.51   Mean   :3.143   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :5.000   Max.   :4.917  
##       stra            surf           points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

From the graph above we can see the distributions of the different variables by gender, and below the graph there are summary statistics for each variable. The average age of the study participants is 25.51 years, and the majority of participants are women. The attitude variable has the strongest correlation with exam points, and the scatter plot of these two variables also indicates that there may be some kind of association between them.
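
As a quick numerical check of that last claim (a small sketch, not part of the original output), the pairwise correlations with points can be computed directly from the numeric columns of lrn14:

# correlation matrix of the numeric variables; attitude should have
# the strongest correlation with points
cor(lrn14[, c("age", "attitude", "deep", "stra", "surf", "points")])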

Linear regression models

model1 <- lm(points ~ attitude + stra + surf, data = lrn14)

summary(model1)
## 
## Call:
## lm(formula = points ~ attitude + stra + surf, data = lrn14)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08
model2 <- lm(points ~ attitude , data = lrn14)

summary(model2)
## 
## Call:
## lm(formula = points ~ attitude, data = lrn14)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.6372     1.8303   6.358 1.95e-09 ***
## attitude      3.5255     0.5674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

We choose the variables attitude, stra and surf as independent variables to fit a linear regression model where points, i.e. exam points, is the dependent variable. The chosen independent variables were those most strongly correlated with the dependent variable. From the Model 1 summary we can see that only attitude has a statistically significant coefficient (3.3952), as its p-value is < 0.0001. With such a low p-value we can reject the null hypothesis that the coefficient is zero. When we fit Model 2 without stra and surf, the coefficient for attitude is still significant and its value (3.5255) is close to that in Model 1.

The multiple R-squared tells us how much of the variation in points our models explain. The multiple R-squared values are quite close to each other in both models (Model 1: 0.2074, Model 2: 0.1906), so roughly speaking both models explain approximately 20% of the variation in points. In Model 2, with a single independent variable, the multiple R-squared is simply the square of the correlation coefficient between the independent and the dependent variable.
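
A minimal check of that last point, reusing the lrn14 data frame from above: squaring the correlation between attitude and points should reproduce the multiple R-squared of Model 2 (about 0.19).

# squared correlation equals multiple R-squared in simple linear regression
cor(lrn14$attitude, lrn14$points)^2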

Diagnostic plots

par(mfrow =c(2,2))
plot(model2, which = c(1,2,5) )

From the Q-Q plot we can see that the residuals fall reasonably well along the line, which indicates that the assumption of normality of the error terms is not violated.

In the plot of residuals against fitted values there is no clear pattern. The size of the error term does not appear to depend on the fitted values, which indicates that the constant variance assumption is not violated.

The leverage plot helps us to assess whether there are influential outliers in our data that affect our estimation results. From our leverage plot we see that there are no influential outliers, as all leverage values on the x-axis are very small.

Exercise 3: Logistic regression

date()
## [1] "Thu Nov 19 18:34:29 2020"

Data description

library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
setwd("~/R/IODS-project")
pormat<-read.csv("data/pormat.csv", header=T)

str(pormat)
## 'data.frame':    370 obs. of  35 variables:
##  $ school    : chr  "GP" "GP" "GP" "GP" ...
##  $ sex       : chr  "F" "F" "F" "F" ...
##  $ age       : int  15 15 15 15 15 15 15 15 15 15 ...
##  $ address   : chr  "R" "R" "R" "R" ...
##  $ famsize   : chr  "GT3" "GT3" "GT3" "GT3" ...
##  $ Pstatus   : chr  "T" "T" "T" "T" ...
##  $ Medu      : int  1 1 2 2 3 3 3 2 3 3 ...
##  $ Fedu      : int  1 1 2 4 3 4 4 2 1 3 ...
##  $ Mjob      : chr  "at_home" "other" "at_home" "services" ...
##  $ Fjob      : chr  "other" "other" "other" "health" ...
##  $ reason    : chr  "home" "reputation" "reputation" "course" ...
##  $ guardian  : chr  "mother" "mother" "mother" "mother" ...
##  $ traveltime: int  2 1 1 1 2 1 2 2 2 1 ...
##  $ studytime : int  4 2 1 3 3 3 3 2 4 4 ...
##  $ schoolsup : chr  "yes" "yes" "yes" "yes" ...
##  $ famsup    : chr  "yes" "yes" "yes" "yes" ...
##  $ activities: chr  "yes" "no" "yes" "yes" ...
##  $ nursery   : chr  "yes" "no" "yes" "yes" ...
##  $ higher    : chr  "yes" "yes" "yes" "yes" ...
##  $ internet  : chr  "yes" "yes" "no" "yes" ...
##  $ romantic  : chr  "no" "yes" "no" "no" ...
##  $ famrel    : int  3 3 4 4 4 4 4 4 4 4 ...
##  $ freetime  : int  1 3 3 3 2 3 2 1 4 3 ...
##  $ goout     : int  2 4 1 2 1 2 2 3 2 3 ...
##  $ Dalc      : int  1 2 1 1 2 1 2 1 2 1 ...
##  $ Walc      : int  1 4 1 1 3 1 2 3 3 1 ...
##  $ health    : int  1 5 2 5 3 5 5 4 3 4 ...
##  $ failures  : int  0 1 0 0 1 0 1 0 0 0 ...
##  $ paid      : chr  "yes" "no" "no" "no" ...
##  $ absences  : int  3 2 8 2 5 2 0 1 9 10 ...
##  $ G1        : int  10 10 14 10 12 12 11 10 16 10 ...
##  $ G2        : int  12 8 13 10 12 12 6 10 16 10 ...
##  $ G3        : int  12 8 12 9 12 12 6 10 16 10 ...
##  $ alc_use   : num  1 3 1 1 2.5 1 2 2 2.5 1 ...
##  $ high_use  : logi  FALSE TRUE FALSE FALSE TRUE FALSE ...

The data set pormat includes 35 variables and 370 observations. The data set has been constructed by joining two data sets that contain information on students from two Portuguese schools. The two original data sets describe the students’ performance in mathematics and in Portuguese language. The variables G1-G3 are the grades; when the data sets were joined, the grades of the two subjects were averaged to create the grade variables of the joined data set. The numbers of absences and failures from the two subjects were averaged in the same way. The paid variable indicates whether the student took extra paid classes in the subject; in the joined data set it represents the answer for Portuguese language. The other variables are background variables such as age, sex, parents’ level of education etc.

When the joined data set was constructed, two new variables were created: alc_use, which is the average of workday (Dalc) and weekend (Walc) alcohol consumption, and high_use, which is a binary variable indicating whether the average alcohol consumption is higher than 2.
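
For illustration, a sketch of how these two variables could be created with dplyr (the actual data wrangling script lives in the repository and may differ in detail):

library(dplyr)
# average of workday and weekend consumption, and the derived indicator
pormat <- mutate(pormat, alc_use = (Dalc + Walc) / 2, high_use = alc_use > 2)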

The two original data sets and a detailed description of all variables in them are available here.

Variables that may be associated with alcohol consumption

Absences: Students who have many absences from class may also have a high alcohol consumption, as absences may reflect students skipping school because they feel too hungover.

Failures: A high alcohol consumption may harm school performance, and students who consume large amounts of alcohol may not pass exams as often as other students.

Sex: It is commonly known that men on average consume more alcohol than women.

Pstatus (parents’ cohabitation status): Students whose parents live apart may consume more alcohol, because their single parents may have to work a lot to provide for the family and cannot spend time supervising what their children are doing. A high alcohol consumption may also be a child’s reaction to the parents’ separation.

The distributions of the variables and their relationships with alcohol consumption

library(tidyr); library(dplyr); library(ggplot2); library(gmodels)

vars<-subset(pormat, select= c("failures","sex","absences", "Pstatus", "alc_use", "high_use"))

# draw a bar plot of each variable
gather(vars) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

g1 <- ggplot(pormat, aes(x = high_use, y = absences))
g1 + geom_boxplot() + ylab("absences") + ggtitle("Student absences by alcohol consumption")

table(high_use = pormat$high_use, failures = pormat$failures)
##         failures
## high_use   0   1   2   3
##    FALSE 238  12   8   1
##    TRUE   87  12   9   3
CrossTable(pormat$high_use, pormat$sex, prop.c=TRUE, prop.r=F, prop.chisq=F, prop.t=F)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Col Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##                 | pormat$sex 
## pormat$high_use |         F |         M | Row Total | 
## ----------------|-----------|-----------|-----------|
##           FALSE |       154 |       105 |       259 | 
##                 |     0.790 |     0.600 |           | 
## ----------------|-----------|-----------|-----------|
##            TRUE |        41 |        70 |       111 | 
##                 |     0.210 |     0.400 |           | 
## ----------------|-----------|-----------|-----------|
##    Column Total |       195 |       175 |       370 | 
##                 |     0.527 |     0.473 |           | 
## ----------------|-----------|-----------|-----------|
## 
## 
CrossTable(pormat$high_use, pormat$Pstatus, prop.c=TRUE, prop.r=F, prop.chisq=F, prop.t=F)
## 
##  
##    Cell Contents
## |-------------------------|
## |                       N |
## |           N / Col Total |
## |-------------------------|
## 
##  
## Total Observations in Table:  370 
## 
##  
##                 | pormat$Pstatus 
## pormat$high_use |         A |         T | Row Total | 
## ----------------|-----------|-----------|-----------|
##           FALSE |        26 |       233 |       259 | 
##                 |     0.684 |     0.702 |           | 
## ----------------|-----------|-----------|-----------|
##            TRUE |        12 |        99 |       111 | 
##                 |     0.316 |     0.298 |           | 
## ----------------|-----------|-----------|-----------|
##    Column Total |        38 |       332 |       370 | 
##                 |     0.103 |     0.897 |           | 
## ----------------|-----------|-----------|-----------|
## 
## 

From the first plot we can see the distributions of our chosen variables. For example, most students’ parents live together and there are more women in our sample than men.

From the second graph we see that the median number of absences is higher among those who consume high amounts of alcohol, and the interquartile range also lies higher.

From the first table we see that the proportion of high consumers is larger among students who have one or more course failures.

From the second table we see that 40% of men and 21% of women have a high alcohol consumption.

From the third table we see that 31.6% of students whose parents live apart and 29.8% of students whose parents live together are high consumers.

A logistic regression model

m1 <- glm(high_use ~ failures + absences + sex + Pstatus, data = pormat, family = "binomial")

# print out a summary of the model
summary(m1)
## 
## Call:
## glm(formula = high_use ~ failures + absences + sex + Pstatus, 
##     family = "binomial", data = pormat)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1317  -0.8455  -0.5912   1.0272   2.0345  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -2.02222    0.43765  -4.621 3.83e-06 ***
## failures     0.59686    0.20720   2.881  0.00397 ** 
## absences     0.09301    0.02336   3.982 6.84e-05 ***
## sexM         0.99666    0.24726   4.031 5.56e-05 ***
## PstatusT     0.08764    0.40210   0.218  0.82746    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 452.04  on 369  degrees of freedom
## Residual deviance: 406.95  on 365  degrees of freedom
## AIC: 416.95
## 
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m1)
## (Intercept)    failures    absences        sexM    PstatusT 
## -2.02221996  0.59686177  0.09301028  0.99666359  0.08764120
# compute odds ratios (OR)
OR <- coef(m1) %>% exp

# compute confidence intervals (CI)
CI<-confint(m1)%>%exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                    OR      2.5 %    97.5 %
## (Intercept) 0.1323613 0.05365585 0.3015602
## failures    1.8164095 1.21846564 2.7632268
## absences    1.0974730 1.05068550 1.1517051
## sexM        2.7092276 1.67857047 4.4338543
## PstatusT    1.0915964 0.50797217 2.4883339

From the summary of our model we can see that the failures, absences and sex variables have statistically significant coefficients (p < 0.001), but the Pstatus variable does not. From the odds ratios we can see that men have 2.71 (95% CI 1.68 to 4.43) times higher odds of high alcohol consumption than women. The numbers of failures and absences increase the odds of high alcohol consumption by 81.6% and 9.7% respectively for every one-unit increase in the variable. From the confidence interval of Pstatus we can draw the same conclusion as from its p-value: the CI includes the value one, which indicates that the OR is not statistically significant. To conclude, failures, absences and sex seem to be associated with high alcohol consumption, as hypothesized previously, but the parents’ cohabitation status is not.
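
Note that the odds ratios are just the exponentiated coefficients; for example for sexM (a quick check reusing m1):

# exp(0.99666) = 2.7092, matching the sexM row of the OR table above
exp(coef(m1)["sexM"])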

The predictive power of the model

m2 <- glm(high_use ~ failures + absences + sex , data = pormat, family = "binomial")

# predict() the probability of high_use
probabilities <- predict(m2, type = "response")

# add the predicted probabilities to data
pormat <- mutate(pormat, probability = probabilities)

# use the probabilities to make a prediction of high_use
pormat <- mutate(pormat, prediction = probability>0.5)

# tabulate the target variable versus the predictions
table(high_use = pormat$high_use, prediction = pormat$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   252    7
##    TRUE     78   33
table(high_use = pormat$high_use, prediction = pormat$prediction)%>%prop.table()%>%addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.68108108 0.01891892 0.70000000
##    TRUE  0.21081081 0.08918919 0.30000000
##    Sum   0.89189189 0.10810811 1.00000000
# plot the target variable versus the predictions
g <- ggplot(pormat, aes(x = high_use, y = probability))
g+geom_point()

loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = pormat$high_use, prob = pormat$probability)
## [1] 0.2297297

As Pstatus was not statistically significant, we leave it out of the final model. Our final model correctly predicted 252 of the 259 FALSE cases, whereas only 33 of the 111 TRUE cases were predicted correctly. The total proportion of inaccurately classified individuals was 23%.
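
For comparison (a small sketch, not part of the original analysis), the same loss function can score the trivial strategy of never predicting high use; with 111 TRUE cases out of 370 its error is 0.3, so the model does better than simple guessing:

# error of always predicting FALSE, i.e. probability 0 for everyone
loss_func(class = pormat$high_use, prob = 0)
# 111/370 = 0.3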

10-Fold cross validation

library(boot)
cv <- cv.glm(data = pormat, cost = loss_func, glmfit = m2, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2486486

According to the 10-fold cross-validation, our model has an error of about 0.25. That is a bit smaller than the error of the DataCamp model, even though our final model is exactly the same. However, our data has fewer observations, so the comparison is not entirely straightforward.
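
Since the 10-fold cross-validation error depends on the random fold split, a more stable estimate could be obtained by repeating the procedure and averaging (a sketch; the seed and the number of repetitions are arbitrary choices):

set.seed(50)
# repeat 10-fold cross-validation 20 times and average the errors
cv_errors <- replicate(20, cv.glm(data = pormat, cost = loss_func, glmfit = m2, K = 10)$delta[1])
mean(cv_errors)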


Exercise 4: Clustering and classification

date()
## [1] "Thu Nov 19 18:34:31 2020"
library(tidyr); library(dplyr); library(ggplot2); library(gmodels)

Data description

library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
# load the data
data("Boston")

# explore the dataset
dim(Boston)
## [1] 506  14
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

The Boston data set has 506 observations and 14 variables. The data contains information on housing values in the suburbs of Boston. A full list of the variables and their descriptions is available here

Overview of the data

pairs(Boston)

pairs(Boston[c("crim","age","dis","rad","tax","black","medv")])

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

From the graphs we can see the distributions of the variables and their relationships with each other. When we take a closer look at some selected variables, we see, for example, that crime rates are higher in areas with a larger proportion of older houses and in areas where the mean distance to employment centres is shorter.
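
These two relationships can also be checked numerically (a quick sketch):

# correlations of the crime rate with housing age and distance to employment centres
cor(Boston[, c("crim", "age", "dis")])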

Standardization of the data, creating test and train data sets

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
# change the object to data frame
boston_scaled<-as.data.frame(boston_scaled)

# create a quantile vector of crim 
bins <- quantile(boston_scaled$crim)

# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# remove the crime variable from the data set
boston_scaled <- dplyr::select(boston_scaled, -crim)

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)


# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

After standardization, we can see from the summary table that all variables now have a mean of zero, and the minimum and maximum values of the scaled variables vary within much smaller intervals than in the original data.
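
scale() simply subtracts the column mean and divides by the column standard deviation; a minimal check on one column (using rm, since crim has already been replaced by the categorical crime variable above):

# manual standardization of one column should match the scale() result
manual_rm <- (Boston$rm - mean(Boston$rm)) / sd(Boston$rm)
all.equal(manual_rm, boston_scaled$rm)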

After creating a categorical variable for the crime rate, we can see from the table that the observations are distributed quite evenly across the four categories.

Linear discriminant analysis

# linear discriminant analysis
lda.fit <- lda(crime~., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2425743 0.2500000 0.2425743 0.2648515 
## 
## Group means:
##                  zn      indus        chas        nox         rm        age
## low       0.9603007 -0.8877158 -0.11163110 -0.8678412  0.4322590 -0.8570834
## med_low  -0.1098365 -0.2463064 -0.07742312 -0.5301939 -0.1853758 -0.2457814
## med_high -0.3804849  0.1637265  0.08924136  0.4063825 -0.0191873  0.4237528
## high     -0.4872402  1.0170108 -0.05155709  1.0770807 -0.3833050  0.8059241
##                 dis        rad        tax      ptratio       black       lstat
## low       0.8302295 -0.6865511 -0.7608232 -0.444665440  0.38499661 -0.77516658
## med_low   0.3601183 -0.5486376 -0.4587498  0.005447313  0.32135485 -0.07266494
## med_high -0.3449008 -0.4111534 -0.3038295 -0.288654539  0.07858534  0.08529259
## high     -0.8500432  1.6392096  1.5148289  0.782035629 -0.82427597  0.88654355
##                 medv
## low       0.52854368
## med_low  -0.06266871
## med_high  0.08685629
## high     -0.71508722
## 
## Coefficients of linear discriminants:
##                  LD1         LD2         LD3
## zn       0.094109369  0.90085820 -0.81734309
## indus   -0.035554476 -0.15920051  0.32687687
## chas     0.025747812 -0.09912745  0.03586020
## nox      0.459379374 -0.63925638 -1.37302914
## rm       0.049755854  0.01301499 -0.13551227
## age      0.247839076 -0.34734052 -0.04417047
## dis     -0.081490631 -0.46593622  0.32020903
## rad      3.157275451  0.98636087 -0.04463778
## tax      0.005525413 -0.21033169  0.59500414
## ptratio  0.153101786  0.07396940 -0.16671214
## black   -0.129431041  0.02863432  0.14176732
## lstat    0.194440033 -0.29804647  0.37712029
## medv     0.065744528 -0.41702050 -0.15389035
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9526 0.0354 0.0120
# target classes as numeric
classes <- as.numeric(train$crime)

#plot LDA biplot
plot(lda.fit, dimen = 2, col = classes, pch = classes)

From the output of our LDA model, we see that the first linear discriminant explains 95.3% of the between-group variance. This can also be seen in the plot, where the separation of the groups along the x axis (LD1) is somewhat clearer, especially for the high category, than along the y axis (LD2).
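
The proportion of trace can also be computed from the singular values that lda() stores in the fitted object (a quick check):

# between-group variance explained by each linear discriminant
lda.fit$svd^2 / sum(lda.fit$svd^2)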

Predictions

# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       17      12        0    0
##   med_low    7      16        2    0
##   med_high   1       6       20    1
##   high       0       0        0   20

From the table we see that our LDA model did a fairly good job, as the majority of predictions were correct in most categories. The low and med_low categories were the hardest to predict correctly.
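
The overall accuracy can be computed from the diagonal of the cross-tabulation (a small sketch reusing the objects above; for this table it is 73/102, about 0.72):

tab <- table(correct = correct_classes, predicted = lda.pred$class)
# proportion of observations on the diagonal, i.e. classified correctly
sum(diag(tab)) / sum(tab)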

K-means clustering

set.seed(123)
library(MASS); library(ggplot2)
data("Boston")

# center and standardize variables
boston_scaled <- scale(Boston)
boston_scaled<-as.data.frame(boston_scaled)

# euclidean distance matrix
dist_eu <- dist(boston_scaled)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# k-means clustering: 
km <-kmeans(boston_scaled, centers = 2)

pairs(boston_scaled, col = km$cluster)

pairs(boston_scaled[6:10], col = km$cluster)

# Investigate the optimal number of clusters with the total of within cluster sum of squares (tWCSS)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

# k-means clustering: 3 clusters
km <-kmeans(boston_scaled, centers = 3)

pairs(boston_scaled, col = km$cluster)

pairs(boston_scaled[6:10], col = km$cluster)

When we first run k-means clustering with two clusters, we can see that, for example, the rad and tax variables seem to have a clear effect on the clustering results. When we investigate the optimal number of clusters, three clusters seem to be enough according to the total within-cluster sum of squares. With three clusters, rad and tax again seem to affect the clustering results.
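
The visual impression can be backed up by inspecting the cluster centers on the standardized scale (a small sketch; rad and tax should show large between-cluster differences):

# cluster centers of the 3-cluster solution, rounded for readability
round(km$centers, 2)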

Bonus

library(MASS)
data("Boston")
set.seed(123)

# center and standardize variables
boston_scaled <- scale(Boston)
boston_scaled<-as.data.frame(boston_scaled)

km <-kmeans(boston_scaled, centers = 3)


boston_scaled1<-data.frame(boston_scaled, km$cluster)

# linear discriminant analysis
lda.fit2 <- lda(km.cluster~., data = boston_scaled1)

# print the lda.fit object
lda.fit2
## Call:
## lda(km.cluster ~ ., data = boston_scaled1)
## 
## Prior probabilities of groups:
##         1         2         3 
## 0.2806324 0.3992095 0.3201581 
## 
## Group means:
##         crim         zn        indus        chas         nox         rm
## 1  0.9693718 -0.4872402  1.074440092 -0.02279455  1.04197430 -0.4146077
## 2 -0.3549295 -0.4039269  0.009294842  0.11748284  0.01531993 -0.2547135
## 3 -0.4071299  0.9307491 -0.953383032 -0.12651054 -0.93243813  0.6810272
##          age        dis        rad        tax     ptratio      black
## 1  0.7666895 -0.8346743  1.5010821  1.4852884  0.73584205 -0.7605477
## 2  0.3096462 -0.2267757 -0.5759279 -0.4964651 -0.09219308  0.2473725
## 3 -1.0581385  1.0143978 -0.5976310 -0.6828704 -0.53004055  0.3582008
##         lstat       medv
## 1  0.85963373 -0.6874933
## 2  0.09168925 -0.1052456
## 3 -0.86783467  0.7338497
## 
## Coefficients of linear discriminants:
##                 LD1         LD2
## crim     0.03654114  0.20373943
## zn      -0.08346821  0.34784463
## indus   -0.32262409 -0.12105014
## chas    -0.04761479 -0.13327215
## nox     -0.13026254  0.15610984
## rm       0.13267423  0.44058946
## age     -0.11936644 -0.84880847
## dis      0.23454618  0.58819732
## rad     -1.96894437  0.57933028
## tax     -1.10861600  0.53984421
## ptratio -0.13087741 -0.02004405
## black    0.15432491 -0.06106305
## lstat   -0.14002173  0.14786473
## medv     0.02559139  0.37307811
## 
## Proportion of trace:
##    LD1    LD2 
## 0.8999 0.1001
# target classes as numeric
classes <- as.numeric(boston_scaled1$km.cluster)

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

#plot LDA biplot
plot(lda.fit2, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit2, myscale = 3)

From the plot we can see that the rad and tax variables are most strongly associated with the first linear discriminant, and dis and age with the second.

Super-bonus

set.seed(123)
model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

#Next, install and access the plotly package. Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.

library(plotly)
## 
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
## 
##     select
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## The following object is masked from 'package:stats':
## 
##     filter
## The following object is masked from 'package:graphics':
## 
##     layout
classes <- as.numeric(train$crime)

plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color=classes)
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.

I didn’t manage to figure out how to add the clusters as colors.
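
One way this might be done (a sketch, not verified against the course solution): run k-means on the same training predictors and pass the cluster assignment as the color argument. The choice of four centers below is arbitrary, to mirror the four crime classes.

# k-means clusters of the train-set predictors as point colors
km_train <- kmeans(model_predictors, centers = 4)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = 'scatter3d', mode = 'markers', color = as.factor(km_train$cluster))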